Results 1 - 20 of 53
1.
Sci Rep ; 14(1): 8334, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38594295

ABSTRACT

Fluorine-18-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) is widely used for detection, diagnosis, and clinical decision-making in oncological diseases. However, in daily medical practice, it is often difficult to make clinical decisions because of physiological FDG uptake or cancers with poor FDG uptake. False-negative clinical diagnoses of malignant lesions are critical issues that require attention. In this study, a Vision Transformer (ViT) was used to automatically classify 18F-FDG PET/CT slices as benign or malignant. This retrospective study included 18F-FDG PET/CT data of 207 (143 malignant and 64 benign) patients from a medical institute to train and test our models. The ViT model achieved an area under the receiver operating characteristic curve (AUC) of 0.90 [95% CI 0.89, 0.91], which was superior to that of the baseline convolutional neural network (CNN) models (EfficientNet, 0.87 [95% CI 0.86, 0.88], P < 0.001; DenseNet, 0.87 [95% CI 0.86, 0.88], P < 0.001). Even when FDG uptake was low, ViT produced an AUC of 0.81 [95% CI 0.77, 0.85], which was higher than that of the CNN (DenseNet, 0.65 [95% CI 0.59, 0.70], P < 0.001). We demonstrated the clinical value of ViT through its sensitive analysis of easy-to-miss cases of oncological disease.
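The AUC used throughout these comparisons can be estimated nonparametrically as the probability that a malignant slice is scored above a benign one (the Mann-Whitney statistic). A minimal sketch with hypothetical model scores, not the study's data:

```python
# Rank-based (Mann-Whitney) estimate of the AUC: the probability that a
# positive (malignant) example outranks a negative (benign) one, with
# ties counted as half a win.
def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

malignant = [0.9, 0.8, 0.7, 0.4]   # hypothetical scores on malignant slices
benign = [0.3, 0.5, 0.2]           # hypothetical scores on benign slices
print(auc(malignant, benign))
```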


Subjects
Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, Humans, Positron Emission Tomography Computed Tomography/methods, Radiopharmaceuticals, Retrospective Studies, Positron-Emission Tomography/methods
2.
Jpn J Radiol ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551771

ABSTRACT

PURPOSE: To propose a five-point scale for radiology report importance called the Report Importance Category (RIC) and to compare the performance of natural language processing (NLP) algorithms in assessing RIC using head computed tomography (CT) reports written in Japanese. MATERIALS AND METHODS: 3728 Japanese head CT reports performed at Osaka University Hospital in 2020 were included. RIC (category 0: no findings; category 1: minor findings; category 2: routine follow-up; category 3: careful follow-up; and category 4: examination or therapy) was established based not only on patient severity but also on the novelty of the information. Manual assessment of RIC for the reports was performed under the consensus of two out of four neuroradiologists. The performance of four NLP models for classifying RIC was compared using fivefold cross-validation: logistic regression, bidirectional long short-term memory (BiLSTM), general bidirectional encoder representations from transformers (general BERT), and domain-specific BERT (BERT for the medical domain). RESULTS: The proportions of the RICs in the whole dataset were 15.0%, 26.7%, 44.2%, 7.7%, and 6.4%, respectively. Domain-specific BERT showed the highest accuracy (0.8434 ± 0.0063) in assessing RIC and significantly higher AUCs in categories 1 (0.9813 ± 0.0011), 2 (0.9492 ± 0.0045), 3 (0.9637 ± 0.0050), and 4 (0.9548 ± 0.0074) than the other models (p < .05). Analysis using layer-integrated gradients showed that the domain-specific BERT model could detect important words, such as disease names, in reports. CONCLUSIONS: Domain-specific BERT is superior to the other models in assessing RIC, our newly proposed importance criterion for head CT radiology reports. The accumulation of similar and further studies has the potential to contribute to medical safety by preventing clinicians from missing important findings.
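The model comparison above relies on fivefold cross-validation: the reports are partitioned into five folds and each fold is held out once. A minimal index-splitting sketch of the procedure (the fold-assignment scheme here is illustrative, not the study's):

```python
# Split n sample indices into k disjoint folds and build the k
# (train, test) index pairs used for k-fold cross-validation.
def kfold_indices(n, k=5):
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits

splits = kfold_indices(10, k=5)
print(len(splits), splits[0])
```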

3.
Comput Biol Med ; 172: 108197, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38452472

ABSTRACT

BACKGROUND: Health-related patient-reported outcomes (HR-PROs) are crucial for assessing the quality of life among individuals experiencing low back pain. However, manual data entry from paper forms, while convenient for patients, imposes a considerable tallying burden on collectors. In this study, we developed a deep learning (DL) model capable of automatically reading these paper forms. METHODS: We employed the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire, a globally recognized assessment tool for low back pain. The questionnaire comprised 25 low back pain-related multiple-choice questions and three pain-related visual analog scales (VASs). We collected 1305 forms from an academic medical center as the training set and 483 forms from a community medical center as the test set. The performance of our DL model for multiple-choice questions was evaluated using accuracy as a categorical classification task. The performance for VASs was evaluated using the correlation coefficient and absolute error as regression tasks. RESULTS: In external validation, the mean accuracy for the categorical questions was 0.997. When outputs for categorical questions with low probability (threshold: 0.9996) were excluded, the accuracy reached 1.000 for the remaining 65% of questions. Regarding the VASs, the average correlation coefficient was 0.989, and the mean absolute error was 0.25. CONCLUSION: Our DL model demonstrated remarkable accuracy and correlation coefficients when automatically reading paper-based HR-PROs during external validation.
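The VAS outputs above are scored as regression tasks with the correlation coefficient and mean absolute error. A minimal sketch with hypothetical manual and model readings:

```python
# Pearson correlation coefficient and mean absolute error between
# manually read and model-read VAS values.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def mae(x, y):
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

truth = [1.0, 3.5, 7.0, 9.5]   # hypothetical manually read VAS values
model = [1.2, 3.4, 6.8, 9.6]   # hypothetical model-read values
print(round(pearson_r(truth, model), 3), round(mae(truth, model), 3))
```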


Subjects
Deep Learning, Low Back Pain, Orthopedics, Humans, Low Back Pain/diagnosis, Low Back Pain/therapy, Quality of Life, Japan, Back Pain, Surveys and Questionnaires
4.
Jpn J Radiol ; 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38413550

ABSTRACT

PURPOSE: To predict solid and micropapillary components in lung invasive adenocarcinoma using radiomic analyses based on high-spatial-resolution CT (HSR-CT). MATERIALS AND METHODS: For this retrospective study, 64 patients with lung invasive adenocarcinoma were enrolled. All patients were scanned by HSR-CT with a 1024 matrix. A pathologist evaluated subtypes (lepidic, acinar, solid, micropapillary, or others). A total of 61 radiomic features in the CT images were calculated using our modified texture analysis software, then filtered and minimized by least absolute shrinkage and selection operator (LASSO) regression to select optimal radiomic features for predicting solid and micropapillary components in lung invasive adenocarcinoma. Final data were obtained by repeating tenfold cross-validation 10 times. Two independent radiologists visually predicted solid or micropapillary components on each image of the 64 nodules with and without the radiomics results. The quantitative values were analyzed with logistic regression models. Receiver operating characteristic curves were generated for the prediction of solid and micropapillary components. P values < 0.05 were considered significant. RESULTS: Two features (Coefficient Variation and Entropy) were independent indicators associated with solid and micropapillary components (odds ratio, 30.5 and 11.4; 95% confidence interval, 5.1-180.5 and 1.9-66.6; and P = 0.0002 and 0.0071, respectively). The area under the curve for predicting solid and micropapillary components was 0.902 (95% confidence interval, 0.802 to 0.962). The radiomics results significantly improved the accuracy and specificity of the two radiologists' predictions. CONCLUSION: Two texture features (Coefficient Variation and Entropy) were significant indicators for predicting solid and micropapillary components in lung invasive adenocarcinoma.
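The odds ratios and confidence intervals above are derived from logistic regression coefficients via OR = exp(β) and a 95% CI of exp(β ± 1.96·SE). A small sketch with a hypothetical standard error (the study's fitted values are not given):

```python
# Odds ratio and 95% confidence interval from a logistic regression
# coefficient beta and its standard error se.
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

beta = log(30.5)   # coefficient that would yield OR = 30.5
se = 0.454         # hypothetical standard error, for illustration only
or_, lo, hi = odds_ratio_ci(beta, se)
print(round(or_, 1), round(lo, 1), round(hi, 1))
```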

5.
Article in English | MEDLINE | ID: mdl-38238492

ABSTRACT

PURPOSE: A large amount of research has been conducted on the classification of medical images using deep learning. Thyroid tissue images can also be classified by cancer type. Deep learning requires a large amount of data, but not every medical institution can collect enough data for deep learning. In such cases, a classifier trained at a medical institution that has sufficient data can be reused at other institutions. However, when using data from multiple institutions, it is necessary to unify the feature distributions, because the features of the data differ owing to differences in data acquisition conditions. METHODS: To unify the feature distributions, the data from Institution T are transformed to have a distribution closer to that of Institution S by applying a domain transformation using semi-supervised CycleGAN. The proposed method enhances CycleGAN by considering the feature distribution of classes, making the domain transformation appropriate for classification. In addition, to address the problem of imbalanced data, with different numbers of samples for each cancer type, several methods for dealing with imbalanced data are applied to the semi-supervised CycleGAN. RESULTS: The experimental results showed that classification performance was enhanced when the dataset from Institution S was used as training data and the test dataset from Institution T was classified after domain transformation. In addition, focal loss contributed most to improving the mean F1 score among the methods addressing class imbalance. CONCLUSION: The proposed method achieved domain transformation of thyroid tissue images between two domains, retaining the important class-related features across domains, and showed the best F1 score, with significant differences compared with the other methods. The proposed method was further enhanced by addressing the class imbalance of the dataset.
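Focal loss, which performed best among the class-imbalance remedies above, down-weights easy examples by the factor (1 − p)^γ; with γ = 0 and α = 1 it reduces to ordinary cross-entropy. A minimal sketch:

```python
# Focal loss for a single example: FL(p) = -alpha * (1 - p)**gamma * log(p),
# where p is the predicted probability of the correct class.
from math import log

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    return -alpha * (1.0 - p_true) ** gamma * log(p_true)

print(round(focal_loss(0.9), 5))   # easy example: strongly down-weighted
print(round(focal_loss(0.1), 5))   # hard example: near full cross-entropy
```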

6.
Article in English | MEDLINE | ID: mdl-37943467

ABSTRACT

PURPOSE: The visualization of an anomalous area is easier in anomaly detection methods that use generative models rather than classification models. However, achieving both anomaly detection accuracy and clear visualization of anomalous areas is challenging. This study aimed to establish a method that combines detection accuracy with clear visualization of anomalous areas using a generative adversarial network (GAN). METHODS: In this study, StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA), which can generate high-resolution, high-quality images from limited training data, was used as the image generation model, and the pixel-to-style-to-pixel (pSp) encoder was used to convert images into intermediate latent variables. We combined existing methods for training and proposed a method for calculating anomaly scores using the intermediate latent variables. The proposed method, which combines these two methods, is called high-quality anomaly GAN (HQ-AnoGAN). RESULTS: The experimental results obtained using three datasets demonstrated that HQ-AnoGAN has detection accuracy equal to or better than that of existing methods. The visualization of abnormal areas using the generated images showed that HQ-AnoGAN could generate more natural images than the existing methods and was qualitatively more accurate in visualizing abnormal areas. CONCLUSION: In this study, HQ-AnoGAN, comprising StyleGAN2-ADA and the pSp encoder, was proposed together with an optimal anomaly score calculation method. The experimental results show that HQ-AnoGAN achieves both high anomaly detection accuracy and clear visualization of abnormal areas; thus, it demonstrates significant potential for application in medical imaging diagnosis, where an explanation of the diagnosis is required.
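GAN-based anomaly scores typically mix an image reconstruction term with a latent-space distance term. The sketch below shows that generic weighted combination; it is an assumption-laden stand-in, not the paper's exact HQ-AnoGAN score:

```python
# Generic GAN anomaly score: A(x) = (1 - lam) * R(x) + lam * L(x), where
# R is mean absolute reconstruction error in image space and L is mean
# absolute distance in latent space. Inputs here are toy flat vectors.
def anomaly_score(image, recon, z, z_recon, lam=0.1):
    r = sum(abs(a - b) for a, b in zip(image, recon)) / len(image)
    l = sum(abs(a - b) for a, b in zip(z, z_recon)) / len(z)
    return (1 - lam) * r + lam * l

score = anomaly_score([0.2, 0.8], [0.25, 0.7], [1.0, -0.5], [0.9, -0.4])
print(round(score, 4))
```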

7.
Sci Rep ; 13(1): 19068, 2023 11 04.
Article in English | MEDLINE | ID: mdl-37925580

ABSTRACT

Despite dedicated research on artificial intelligence (AI) for pathological images, the construction of AI applicable to histopathological tissue subtypes is limited by insufficient dataset collection owing to disease infrequency. Here, we present a solution: adding supplemental tissue array (TA) images, adjusted to the tonality of the main data using a cycle-consistent generative adversarial network (CycleGAN), to the training data for rare tissue types. F1 scores of rare tissue types that constitute < 1.2% of the training data were significantly increased through improved recall values after adding color-adjusted TA images constituting < 0.65% of the total training patches. The detector also enabled equivalent discrimination of clinical images from two distinct hospitals, and this capability improved further after color correction of the test data before AI identification (F1 score from 45.2 ± 27.1 to 77.1 ± 10.3, p < 0.01). These methods also classified intraoperative frozen sections, although excessive supplementation paradoxically decreased F1 scores. These results identify strategies for building an AI that handles the imbalance in training data caused by large differences in actual disease frequencies, which is important for constructing AI for practical histopathological classification.
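The F1 gains above come mainly from improved recall on rare classes; F1 is the harmonic mean of precision and recall, so lifting the weaker term helps most. A quick illustration with hypothetical values:

```python
# F1 score as the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.80, 0.30), 3))  # low recall on a rare class (hypothetical)
print(round(f1(0.80, 0.70), 3))  # after augmentation lifts recall
```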


Subjects
Artificial Intelligence, Caffeine, Frozen Sections, Histocompatibility Testing, Hospitals
8.
J Clin Med ; 12(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37685677

ABSTRACT

Background: Dual-energy CT has been reported to be useful for differentiating thymic epithelial tumors. The purpose of this study was to evaluate thymic epithelial tumors using three-dimensional (3D) iodine density histogram texture analysis on dual-energy CT and to investigate the association of the extracellular volume fraction (ECV) with the fibrosis of thymic carcinoma. Methods: Forty-two patients with low-risk thymoma (n = 20), high-risk thymoma (n = 16), and thymic carcinoma (n = 6) were scanned by dual-energy CT. 3D iodine density histogram texture analysis was performed for each nodule on iodine density mapping, yielding seven texture features (max, min, median, average, standard deviation [SD], skewness, and kurtosis). The iodine effect (average on DECT180s minus average on unenhanced DECT) and ECV on DECT180s were measured. Tissue fibrosis was subjectively rated by one pathologist on a three-point scale. Associations of these quantitative data with thymic carcinoma and high-risk thymoma were analyzed with univariate and multivariate logistic regression models (LRMs). The area under the curve (AUC) was calculated from receiver operating characteristic curves. p values < 0.05 were considered significant. Results: The multivariate LRM showed that ECV > 21.47% on DECT180s could predict thymic carcinoma (odds ratio [OR], 11.4; 95% confidence interval [CI], 1.18-109; p = 0.035). Diagnostic performance was as follows: sensitivity, 83.3%; specificity, 69.4%; AUC, 0.76. For high-risk thymoma vs. low-risk thymoma, the multivariate LRM showed that an iodine effect ≤1.31 mg/cc could predict high-risk thymoma (OR, 7; 95% CI, 1.02-39.1; p = 0.027). Diagnostic performance was as follows: sensitivity, 87.5%; specificity, 50%; AUC, 0.69. Tissue fibrosis significantly correlated with thymic carcinoma (p = 0.026).
Conclusions: ECV on DECT180s, which is related to fibrosis, may help distinguish thymic carcinoma from other thymic epithelial tumors, and the iodine effect on DECT180s may help distinguish high-risk thymoma from low-risk thymoma.
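Iodine-based ECV at equilibrium phase is commonly computed as (1 − hematocrit) × (iodine density in tissue / iodine density in blood). The sketch below uses this common formula with hypothetical values; the study's exact protocol may differ:

```python
# Common iodine-based extracellular volume fraction (percent):
# ECV = (1 - hematocrit) * (iodine_tissue / iodine_blood) * 100.
# All inputs below are hypothetical illustration values.
def ecv_percent(iodine_tissue, iodine_blood, hematocrit):
    return (1.0 - hematocrit) * (iodine_tissue / iodine_blood) * 100.0

print(round(ecv_percent(0.8, 2.0, 0.40), 2))  # above the 21.47% cutoff
```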

9.
iScience ; 26(10): 107900, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37766987

ABSTRACT

We proposed a bimodal artificial intelligence model that integrates patient information with images to diagnose spinal cord tumors. Our model combines TabNet, a state-of-the-art deep learning model for tabular data, for the patient information, and a convolutional neural network for the images. As training data, we collected 259 spinal tumor patients (158 with schwannoma and 101 with meningioma). We compared the performance of an image-only unimodal model, a table-only unimodal model, a bimodal model using a gradient-boosting decision tree, and a bimodal model using TabNet. Our proposed bimodal model using TabNet performed best (area under the receiver operating characteristic curve [AUROC]: 0.91) on the training data and significantly outperformed the physicians. In external validation on 62 cases from two other facilities, our bimodal model showed an AUROC of 0.92, demonstrating its robustness. Bimodal analysis using TabNet was effective for differentiating spinal tumors.
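One simple way to fuse two modalities is late fusion: averaging the class probabilities of the two branches. This is a hypothetical sketch of the idea only; the study fuses TabNet and CNN representations, not raw probabilities:

```python
# Late fusion of two class-probability vectors by weighted averaging.
# p_image and p_table are hypothetical outputs of the two branches.
def fuse(p_image, p_table, w=0.5):
    return [w * a + (1 - w) * b for a, b in zip(p_image, p_table)]

print(fuse([0.7, 0.3], [0.9, 0.1]))  # e.g. schwannoma vs meningioma
```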

10.
Clin Lung Cancer ; 24(6): 541-550, 2023 09.
Article in English | MEDLINE | ID: mdl-37407293

ABSTRACT

INTRODUCTION/BACKGROUND: To evaluate cases of surgically resected pulmonary adenocarcinoma (Ad) with heterogenous ground-glass nodules (HGGNs) or part-solid nodules (PSNs) and to clarify the differences between them, and between invasive adenocarcinoma (IVA) and minimally invasive adenocarcinoma (MIA) + adenocarcinoma in situ (AIS) using grayscale histogram analysis of thin-section computed tomography (TSCT). MATERIALS AND METHODS: 241 patients with pulmonary Ad were retrospectively classified into HGGNs and PSNs on TSCT by three thoracic radiologists. Sixty HGGNs were classified into 17 IVAs, 26 MIAs, and 17 AISs. 181 PSNs were classified into 114 IVAs, 55 MIAs, and 12 AISs. RESULTS: We found significant differences in area (P = 0.0024), relative size of solid component (P <0.0001), circumference (P <0.0001), mean CT value (P <0.0001), standard deviation of the CT value (P <0.0001), maximum CT value (P <0.0001), skewness (P <0.0001), kurtosis (P <0.0001), and entropy (P <0.0001) between HGGNs and PSNs. In HGGNs, we found significant differences in relative size of solid component (P <0.0001), mean CT value (P = 0.0005), standard deviation of CT value (P = 0.0071), maximum CT value (P = 0.0237), and skewness (P = 0.0027) between IVAs and MIA+AIS lesions. In PSNs, we found significant differences in area (P = 0.0029), relative size of solid component (P = 0.0003), circumference (P = 0.0004), mean CT value (P = 0.0011), skewness (P = 0.0009), and entropy (P = 0.0002) between IVAs and the MIA+AIS lesions. CONCLUSION: Quantitative evaluations using grayscale histogram analysis can clearly distinguish between HGGNs and PSNs, and may be useful for estimating the pathology of such lesions.
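Several of the histogram features above (skewness, kurtosis, entropy) can be computed directly from the pixel values. A minimal sketch over a hypothetical list of CT values:

```python
# Central moments (mean, SD, skewness, kurtosis) and Shannon entropy of
# a grayscale histogram, computed over a flat list of CT values (HU).
from math import sqrt, log2
from collections import Counter

def moments(v):
    n = len(v)
    m = sum(v) / n
    sd = sqrt(sum((x - m) ** 2 for x in v) / n)
    skew = sum(((x - m) / sd) ** 3 for x in v) / n
    kurt = sum(((x - m) / sd) ** 4 for x in v) / n
    return m, sd, skew, kurt

def entropy(v):
    n = len(v)
    return -sum((c / n) * log2(c / n) for c in Counter(v).values())

hu = [-700, -650, -650, -600, -200]   # hypothetical CT values (HU)
print(moments(hu), entropy(hu))
```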


Subjects
Adenocarcinoma of Lung, Adenocarcinoma, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/surgery, Lung Neoplasms/pathology, Retrospective Studies, Neoplasm Invasiveness, Adenocarcinoma of Lung/diagnostic imaging, Adenocarcinoma of Lung/surgery, Adenocarcinoma/diagnostic imaging, Adenocarcinoma/surgery, Adenocarcinoma/pathology, Tomography, X-Ray Computed/methods
11.
iScience ; 26(7): 107086, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37434699

ABSTRACT

In this study, we present a self-supervised learning (SSL)-based model that enables anatomical structure-based unsupervised anomaly detection (UAD). The model employs an anatomy-aware pasting (AnatPaste) augmentation tool that uses a threshold-based lung segmentation pretext task to create anomalies in the normal chest radiographs used for model pretraining. These anomalies are similar to real anomalies and help the model recognize them. We evaluate our model using three open-source chest radiograph datasets. Our model exhibits areas under the curve of 92.1%, 78.7%, and 81.9%, the highest among existing UAD models. To the best of our knowledge, this is the first SSL model to employ anatomical information from segmentation as a pretext task. The performance of AnatPaste shows that incorporating anatomical information into SSL can effectively improve accuracy.
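The AnatPaste idea above combines a threshold-based lung segmentation with pasting a synthetic anomaly inside the mask. A toy one-dimensional sketch of the two steps (values and threshold are hypothetical):

```python
# Step 1: segment the lung by a fixed intensity threshold (lung regions
# are dark on radiographs). Step 2: paste a bright patch inside the mask
# to create a synthetic anomaly for pretraining. Toy 1-D "image".
def threshold_mask(pixels, t=0.3):
    return [1 if p < t else 0 for p in pixels]

def paste_anomaly(pixels, mask, patch=0.9):
    return [patch if m else p for p, m in zip(pixels, mask)]

img = [0.8, 0.2, 0.1, 0.25, 0.7]
mask = threshold_mask(img)
print(mask, paste_anomaly(img, mask))
```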

12.
Radiol Artif Intell ; 5(2): e220097, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37035437

ABSTRACT

Purpose: To assess whether transfer learning with a bidirectional encoder representations from transformers (BERT) model, pretrained on a clinical corpus, can perform sentence-level anatomic classification of free-text radiology reports, even for anatomic classes with few positive examples. Materials and Methods: This retrospective study included radiology reports of patients who underwent whole-body PET/CT imaging from December 2005 to December 2020. Each sentence in these reports (6272 sentences) was labeled by two annotators according to body part ("brain," "head & neck," "chest," "abdomen," "limbs," "spine," or "others"). The BERT-based transfer learning approach was compared with two baseline machine learning approaches: bidirectional long short-term memory (BiLSTM) and the count-based method. Area under the precision-recall curve (AUPRC) and area under the receiver operating characteristic curve (AUC) were computed for each approach, and AUCs were compared using the DeLong test. Results: The BERT-based approach achieved a macro-averaged AUPRC of 0.88 for classification, outperforming the baselines. AUC results for BERT were significantly higher than those of BiLSTM for all classes and those of the count-based method for the "brain," "chest," "abdomen," and "others" classes (P values < .025). AUPRC results for BERT were superior to those of baselines even for classes with few labeled training data (brain: BERT, 0.95, BiLSTM, 0.11, count based, 0.41; limbs: BERT, 0.74, BiLSTM, 0.28, count based, 0.46; spine: BERT, 0.82, BiLSTM, 0.53, count based, 0.69). Conclusion: The BERT-based transfer learning approach outperformed the BiLSTM and count-based approaches in sentence-level anatomic classification of free-text radiology reports, even for anatomic classes with few labeled training data. Keywords: Anatomy, Comparative Studies, Technology Assessment, Transfer Learning. Supplemental material is available for this article. © RSNA, 2023.
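The AUPRC reported above can be estimated as average precision over the ranked predictions. A minimal sketch with hypothetical labels sorted by model score:

```python
# Average precision: mean of the precision values at each rank where a
# true positive occurs, over predictions sorted by descending score.
def average_precision(labels_by_score_desc):
    tp, precisions = 0, []
    for i, y in enumerate(labels_by_score_desc, start=1):
        if y == 1:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / len(precisions)

# sentences ranked by model score; 1 = belongs to the target body part
print(average_precision([1, 1, 0, 1, 0]))
```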

13.
Eur Radiol ; 33(1): 348-359, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35751697

ABSTRACT

OBJECTIVES: To compare the performance of radiologists in characterizing and diagnosing pulmonary nodules/masses with and without deep learning (DL)-based computer-aided diagnosis (CAD). METHODS: We studied a total of 101 nodules/masses detected on CT performed between January and March 2018 at Osaka University Hospital (malignancy: 55 cases). SYNAPSE SAI Viewer V1.4 was used to analyze the nodules/masses. In total, 15 independent radiologists were grouped (n = 5 each) according to their experience: L (< 3 years), M (3-5 years), and H (> 5 years). The likelihoods of 15 characteristics, such as cavitation and calcification, and the diagnosis (malignancy) were evaluated by each radiologist with and without CAD, and the assessment time was recorded. The AUCs compared with the reference standard set by two board-certified chest radiologists were analyzed following the multi-reader multi-case method. Furthermore, interobserver agreement was compared using intraclass correlation coefficients (ICCs). RESULTS: The AUCs for ill-defined boundary, irregular margin, irregular shape, calcification, pleural contact, and malignancy in all 15 radiologists, irregular margin and irregular shape in L and ill-defined boundary and irregular margin in M improved significantly (p < 0.05); no significant improvements were found in H. L showed the greatest increase in the AUC for malignancy (not significant). The ICCs improved in all groups and for nearly all items. The median assessment time was not prolonged by CAD. CONCLUSIONS: DL-based CAD helps radiologists, particularly those with < 5 years of experience, to accurately characterize and diagnose pulmonary nodules/masses, and improves the reproducibility of findings among radiologists. KEY POINTS: • Deep learning-based computer-aided diagnosis improves the accuracy of characterizing nodules/masses and diagnosing malignancy, particularly by radiologists with < 5 years of experience. 
• Computer-aided diagnosis increases not only the accuracy but also the reproducibility of the findings across radiologists.


Subjects
Deep Learning, Lung Neoplasms, Multiple Pulmonary Nodules, Solitary Pulmonary Nodule, Humans, Observer Variation, Reproducibility of Results, Multiple Pulmonary Nodules/diagnostic imaging, Radiologists, Diagnosis, Computer-Assisted/methods, Computers, Lung Neoplasms/diagnostic imaging, Sensitivity and Specificity, Solitary Pulmonary Nodule/diagnostic imaging
14.
J Orthop Sci ; 28(6): 1392-1399, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36163118

ABSTRACT

BACKGROUND: The Japanese Orthopaedic Association National Registry (JOANR) was recently launched in Japan and is expected to improve the quality of medical care. However, surgeons must register ten detailed features for total hip arthroplasty, which is labor intensive. One possible solution is a system that automatically extracts information about the surgeries. Although it is not easy to extract features from operative records consisting of free-text data, natural language processing (NLP) has been used for this purpose. This study aimed to determine the best NLP method for building a system that automatically detects some elements of the JOANR from operative records of total hip arthroplasty. METHODS: We obtained operative records of total hip arthroplasty (n = 2574) from three hospitals and targeted two items: surgical approach and fixation technique. We compared the accuracy of three NLP methods: rule-based algorithms, machine learning, and bidirectional encoder representations from transformers (BERT). RESULTS: In the surgical approach task, the accuracy of BERT was superior to that of the rule-based algorithm (99.6% vs. 93.6%, p < 0.001) and comparable to that of machine learning. In the fixation technique task, the accuracy of BERT was superior to that of the rule-based algorithm and machine learning (96% vs. 74%, p < 0.0001, and vs. 94%, p = 0.0004). CONCLUSIONS: BERT is the most appropriate method for building a system that automatically detects the surgical approach and fixation technique.
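A rule-based baseline of the kind compared above can be as simple as keyword rules over the operative record. The keywords below are hypothetical English stand-ins for the Japanese terms, for illustration only:

```python
# Keyword-rule classifier mapping free-text operative records to a
# surgical approach label; returns "unknown" when no rule fires.
import re

RULES = [
    (r"posterolateral|posterior approach", "posterior"),
    (r"direct anterior|anterior approach", "anterior"),
    (r"lateral approach", "lateral"),
]

def classify_approach(text):
    for pattern, label in RULES:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return label
    return "unknown"

print(classify_approach("THA via direct anterior approach, cementless cup"))
```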


Subjects
Artificial Intelligence, Orthopedics, Humans, Algorithms, Registries, Japan, Surgical Procedures, Operative, Medical Records, Machine Learning
15.
Sci Rep ; 12(1): 15732, 2022 09 21.
Article in English | MEDLINE | ID: mdl-36130962

ABSTRACT

Cervical sagittal alignment is an essential parameter for the evaluation of spine disorders. Manual measurement is time-consuming and burdensome for measurers. Artificial intelligence (AI) in the form of convolutional neural networks has begun to be used to measure x-rays. This study aimed to develop AI for the automated measurement of lordosis on lateral cervical x-rays. We included 4546 cervical x-rays from 1674 patients. For all x-rays, the caudal endplates of C2 and C7 were labeled based on consensus among experienced spine surgeons, and these labels were used as the ground truth. The ground truth was split into training and test data, and the AI model learned from the training data. The absolute error of the AI measurements relative to the ground truth for the 4546 x-rays was determined by fivefold cross-validation. Additionally, the absolute error of the AI measurements was compared with the errors of two other surgeons' measurements on 415 radiographs of 168 randomly selected patients. In fivefold cross-validation, the absolute error of the AI model was 3.3° on average (median, 2.2°). In the comparison with the surgeons, the mean absolute error for the 168 patients was 3.1° ± 3.4° for the AI model, 3.9° ± 3.4° for Surgeon 1, and 3.8° ± 4.7° for Surgeon 2. The AI model had a significantly smaller error than Surgeon 1 and Surgeon 2 (P = 0.002 and 0.036). The algorithm is available at https://ykszk.github.io/c2c7demo/ . The AI model measured cervical spine alignment more accurately than the surgeons. AI can assist in routine medical care and can be helpful in research that measures large numbers of images. However, because of large errors in rare cases, such as highly deformed spines, AI may, in principle, be limited to assisting humans.
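The C2-C7 lordosis angle can be computed from the two labeled caudal endplates as the angle between their lines. A minimal sketch with hypothetical endplate coordinates (not the study's implementation):

```python
# Angle in degrees between the C2 and C7 caudal endplate lines, each
# defined by two labeled points (x, y).
from math import atan2, degrees

def endplate_angle(p1, p2, q1, q2):
    a1 = atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = atan2(q2[1] - q1[1], q2[0] - q1[0])
    return abs(degrees(a1 - a2))

c2 = ((0.0, 0.0), (10.0, 2.0))    # hypothetical C2 caudal endplate points
c7 = ((0.0, 50.0), (10.0, 48.0))  # hypothetical C7 caudal endplate points
print(round(endplate_angle(*c2, *c7), 1))
```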


Subjects
Lordosis, Artificial Intelligence, Cervical Vertebrae/diagnostic imaging, Cervical Vertebrae/surgery, Humans, Lordosis/diagnostic imaging, Lordosis/surgery, Neck, Radiography
16.
Sci Rep ; 12(1): 12176, 2022 07 16.
Article in English | MEDLINE | ID: mdl-35842451

ABSTRACT

The virtual thin-slice (VTS) technique is a generative adversarial network-based algorithm that can generate virtual 1-mm-thick CT images from images of 3-10-mm thickness. We evaluated the performance of the VTS technique for assessment of the spine. VTS was applied to 4-mm-thick CT images of 73 patients, and the visibility of the intervertebral spaces was evaluated on the 4-mm-thick and VTS images. The heights of vertebrae measured on sagittal images reconstructed from the 4-mm-thick images and VTS images were compared with those measured on images reconstructed from 1-mm-thick images. Diagnostic performance for the detection of compression fractures was also compared. The intervertebral spaces were significantly more visible on the VTS images than on the 4-mm-thick images (P < 0.001). The absolute difference in mean vertebral height between the VTS and 1-mm-thick images was smaller than that between the 4-mm-thick and 1-mm-thick images (P < 0.01-0.54). The diagnostic performance of the VTS images for detecting compression fractures was significantly lower than that of the 4-mm-thick images for one reader (P = 0.02). The VTS technique enabled the identification of each vertebral body and accurate measurement of vertebral height. However, the technique is not suitable for diagnosing compression fractures.


Subjects
Fractures, Compression, Spinal Fractures, Algorithms, Fractures, Compression/diagnostic imaging, Humans, Spinal Fractures/diagnostic imaging, Spine/diagnostic imaging, Tomography, X-Ray Computed/methods
17.
Front Artif Intell ; 5: 782225, 2022.
Article in English | MEDLINE | ID: mdl-35252849

ABSTRACT

In computer-aided diagnosis systems for lung cancer, segmentation of lung nodules is important for analyzing the image features of lung nodules on computed tomography (CT) images and distinguishing malignant nodules from benign ones. However, it is difficult to accurately and robustly segment lung nodules attached to the chest wall or those with ground-glass opacities using conventional image processing methods. Therefore, this study aimed to develop a method for robust and accurate three-dimensional (3D) segmentation of lung nodule regions using deep learning. We proposed a nested 3D fully connected convolutional network with residual unit structures and designed a new loss function. Compared with annotated images obtained under the guidance of a radiologist, the Dice similarity coefficient (DS) and intersection over union (IoU) were 0.845 ± 0.008 and 0.738 ± 0.011, respectively, for 332 lung nodules (lung adenocarcinoma) obtained from 332 patients. For 3D U-Net and 3D SegNet, the DS was 0.822 ± 0.009 and 0.786 ± 0.011, respectively, and the IoU was 0.711 ± 0.011 and 0.660 ± 0.012, respectively. These results indicate that the proposed method is significantly superior to these well-known deep learning models. Moreover, we compared the results of the proposed method with those of conventional image processing methods, namely watershed and graph cut. The DS and IoU for the watershed method were 0.628 ± 0.027 and 0.494 ± 0.025, respectively, and those for the graph cut method were 0.566 ± 0.025 and 0.414 ± 0.021, respectively. These results indicate that the proposed method is also significantly superior to conventional image processing methods. The proposed method may be useful for the accurate and robust segmentation of lung nodules, assisting radiologists in the diagnosis of lung nodules such as lung adenocarcinoma on CT images.
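The Dice similarity coefficient and intersection over union reported above are set-overlap metrics. A minimal sketch on flat binary masks:

```python
# Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|, computed on
# flattened binary segmentation masks (1 = nodule, 0 = background).
def dice_iou(pred, gt):
    inter = sum(p & g for p, g in zip(pred, gt))
    ps, gs = sum(pred), sum(gt)
    dice = 2 * inter / (ps + gs)
    iou = inter / (ps + gs - inter)
    return dice, iou

pred = [1, 1, 1, 0, 0, 1]
gt   = [1, 1, 0, 0, 1, 1]
print(dice_iou(pred, gt))
```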

18.
Ann Clin Epidemiol ; 4(4): 110-119, 2022.
Article in English | MEDLINE | ID: mdl-38505255

ABSTRACT

BACKGROUND: We aimed to develop and externally validate a novel machine learning model that can classify CT image findings as positive or negative for SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR). METHODS: We used 2,928 images from a wide variety of case-control-type data sources for the development and internal validation of the machine learning model. A total of 633 COVID-19 cases and 2,295 non-COVID-19 cases were included in the study. We randomly divided cases into training and tuning sets at a ratio of 8:2. For external validation, we used 893 images from 740 consecutive patients suspected of having COVID-19 at the time of diagnosis at 11 acute care hospitals. The dataset included 343 COVID-19 patients. The reference standard was RT-PCR. RESULTS: In external validation, the sensitivity and specificity of the model were 0.869 and 0.432 at the low-level cutoff, and 0.724 and 0.721 at the high-level cutoff, respectively. The area under the receiver operating characteristic curve was 0.76. CONCLUSIONS: Our machine learning model exhibited high sensitivity on the external validation dataset and may assist physicians in ruling out COVID-19 in a timely manner in emergency departments. Further studies are warranted to improve the model's specificity.
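The two operating points above correspond to thresholding the model's output score: a low-level cutoff trades specificity for the high sensitivity needed to rule out disease, while a high-level cutoff balances the two. A hedged sketch with made-up scores (not the study's model or data):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a score threshold against a binary reference standard."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative scores: 1 = RT-PCR positive, 0 = RT-PCR negative
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
sens_lo, spec_lo = sens_spec(scores, labels, 0.25)  # rule-out oriented cutoff
sens_hi, spec_hi = sens_spec(scores, labels, 0.75)  # more specific cutoff
```

On these toy data the low cutoff catches every positive case at the cost of false positives, mirroring the rule-out role the study assigns to its low-level cutoff.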

19.
Int J Comput Assist Radiol Surg ; 16(11): 1925-1935, 2021 Nov.
Artigo em Inglês | MEDLINE | ID: mdl-34661818

RESUMO

PURPOSE: The performance of deep learning may fluctuate depending on the imaging devices and settings. Although domain transformation methods such as CycleGAN are useful for normalizing images, CycleGAN does not use information on the disease classes. Therefore, we propose a semi-supervised CycleGAN with an additional classification loss that transforms images to be suitable for diagnosis. The method is evaluated on opacity classification of chest CT. METHODS: (1) CT images taken at two hospitals (source and target domains) are used. (2) A classifier is trained on the target domain. (3) Class labels are given to a small number of source-domain images for semi-supervised learning. (4) The source-domain images are transformed to the target domain. (5) A classification loss of the transformed images with class labels is calculated. RESULTS: The proposed method achieved an F-measure of 0.727 for the domain transformation from hospital A to B and 0.745 for that from hospital B to A, with significant differences between the proposed method and the other three methods. CONCLUSIONS: The proposed method not only transforms the appearance of the images but also retains the features important for classifying opacities, and it shows the best precision, recall, and F-measure.


Assuntos
Processamento de Imagem Assistida por Computador , Pneumopatias , Humanos , Pneumopatias/diagnóstico por imagem , Aprendizado de Máquina Supervisionado , Tomografia Computadorizada por Raios X
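The additional classification loss in step (5) amounts to a cross-entropy term on the few labelled transformed images, added to the usual CycleGAN generator objective. A simplified NumPy sketch; the weights `lam_cyc` and `lam_cls` and the scalar loss inputs are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generator_objective(adv_loss, cycle_loss, cls_logits, cls_labels,
                        lam_cyc=10.0, lam_cls=1.0):
    """Adversarial + cycle-consistency terms plus a cross-entropy
    classification loss on labelled transformed images."""
    z = cls_logits - cls_logits.max(axis=1, keepdims=True)      # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(cls_labels)), cls_labels].mean()   # mean cross-entropy
    return adv_loss + lam_cyc * cycle_loss + lam_cls * ce

# One labelled transformed image with uniform logits over 2 opacity
# classes -> cross-entropy = ln 2, added on top of the GAN terms
loss = generator_objective(1.0, 0.1, np.array([[0.0, 0.0]]), np.array([0]))
```

Because the gradient of the classification term flows back through the generator, the transformation is pushed to preserve class-discriminative features rather than appearance alone, which is the stated advantage over plain CycleGAN.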
20.
Eur Radiol ; 31(4): 1978-1986, 2021 Apr.
Artigo em Inglês | MEDLINE | ID: mdl-33011879

RESUMO

OBJECTIVES: To compare diagnostic performance for pulmonary invasive adenocarcinoma among radiologists with and without a three-dimensional convolutional neural network (3D-CNN). METHODS: Enrolled were 285 patients with adenocarcinoma in situ (AIS, n = 75), minimally invasive adenocarcinoma (MIA, n = 58), and invasive adenocarcinoma (IVA, n = 152). A 3D-CNN model was constructed with seven convolution-pooling layers, two max-pooling layers, and fully connected layers, in which batch normalization, residual connections, and global average pooling were used. Only a flipping process was performed for augmentation. The output layer comprised two nodes for the two conditions (AIS/MIA and IVA) according to prognosis. Diagnostic performance of the 3D-CNN model in the 285 patients was calculated using nested 10-fold cross-validation. In 90 of the 285 patients, the results of each radiologist (R1, R2, and R3; with 9, 14, and 26 years of experience, respectively) with and without the 3D-CNN model were statistically compared. RESULTS: Without the 3D-CNN model, the accuracy, sensitivity, and specificity of the radiologists were as follows: R1, 70.0%, 52.1%, and 90.5%; R2, 72.2%, 75.0%, and 69.0%; and R3, 74.4%, 89.6%, and 57.1%, respectively. With the 3D-CNN model, they were as follows: R1, 72.2%, 77.1%, and 66.7%; R2, 74.4%, 85.4%, and 61.9%; and R3, 74.4%, 93.8%, and 52.4%, respectively. The overall diagnostic performance of each radiologist did not differ significantly with versus without the 3D-CNN model (p > 0.88), but the accuracy of R1 and R2 was significantly higher with than without the model (p < 0.01). CONCLUSIONS: The 3D-CNN model can support less-experienced radiologists in improving diagnostic accuracy for pulmonary invasive adenocarcinoma without deteriorating other diagnostic performance. KEY POINTS: • The 3D-CNN model is a non-invasive method for predicting pulmonary invasive adenocarcinoma on CT images with high sensitivity.
• Diagnostic accuracy by a less-experienced radiologist was better with the 3D-CNN model than without the model.


Assuntos
Adenocarcinoma de Pulmão , Neoplasias Pulmonares , Adenocarcinoma de Pulmão/diagnóstico por imagem , Humanos , Neoplasias Pulmonares/diagnóstico por imagem , Redes Neurais de Computação , Radiologistas , Tomografia Computadorizada por Raios X
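Two of the architectural elements named in the abstract, residual connections and global average pooling before the two-node output layer, can be sketched in a few lines. A NumPy toy (shapes and the identity "transform" are illustrative, not the paper's trained network):

```python
import numpy as np

def residual_block(x, transform):
    """Identity shortcut: the block's output is the input plus a transform of it."""
    return x + transform(x)

def global_average_pooling(feature_maps):
    """Collapse each 3D feature map (C, D, H, W) to one value per channel,
    feeding a small fully connected output layer."""
    return feature_maps.mean(axis=(1, 2, 3))

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4, 4))       # 8 channels of 4x4x4 features
skipped = residual_block(fmap, lambda x: x)    # identity transform: doubles the input
pooled = global_average_pooling(fmap)          # shape (8,): one value per channel
```

Global average pooling keeps the parameter count of the final layers small, which matters here given the modest cohort (285 patients) and the minimal flipping-only augmentation.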